Saeed Farzaneh; Mohammad Ali Sharifi; Amir Abdolmaleki; Masood Dehvari
Abstract
Extended Abstract
Introduction
Satellites in geodesy receive and transmit important information. Among them, satellites in Low Earth Orbit (LEO), at altitudes below 1000 km, play a significant role in the advancement of geophysical sciences such as the study of the Earth's potential field. Many parameters affect the precision and accuracy of their measurements. Atmospheric drag is one of the principal forces acting on these satellites and may cause orbital deviation and decay over a short period. Since the beginning of aerospace missions, geodesists have made many efforts to determine atmospheric drag, e.g., through empirical models of the neutral atmospheric density. Because of the complex nature of atmospheric behavior and the limitations of the available data, these models may have low accuracy. Methods are therefore needed that improve the accuracy of empirical models by combining them with observations of atmospheric density to predict its future state.
Materials & Methods
Along with the development of computer science, new reliable algorithms have been introduced that are able to predict a time series; Artificial Intelligence (AI) and Neural Networks (NN) are among the best of these methods. These simple algorithms are inspired by the human brain and its ability to learn, and have been used in many different scientific fields. In these techniques, without any need for complex modeling, the relation between input and output is established using only weight and bias vectors during the training procedure. Simple neural networks are memoryless, meaning that previous values of the time series cannot be used to predict its future values, so important temporal dependencies of the signal are lost. Recurrent Neural Networks (RNNs) have been implemented to overcome this issue.
RNNs can store important information about previous values of the time series in a chain-like structure and use this information to predict the next value, which improves the prediction accuracy. In this study, the Long Short-Term Memory (LSTM) neural network, a kind of recurrent neural network, has been implemented to predict the scale factor used to correct the atmospheric density of numerical models. GRACE accelerometer observations from the first six months of 2014 have been used to train the LSTM in univariate mode. The LSTM has also been trained in multivariate mode, once using the coefficients of the atmospheric correction expansion up to degree 2 and once using solar and geomagnetic information together with the k_p index.
Results & Discussion
After training the LSTM network, the zero-degree coefficient of the harmonic expansion of the scale factor for correcting atmospheric density has been predicted, using the estimated model parameters, over periods of 7, 14, 30, 60, and 90 days. The results of the univariate model show that the lowest RMSE (Root Mean Square Error), about 0.054, is obtained for the 14-day prediction period. The results also show that the multivariate model using solar and geomagnetic information and the k_p index has lower RMSE values over the considered prediction periods than the other modes; its lowest RMSE, about 0.03, belongs to the 7-day prediction. To evaluate the effect of the LSTM parameters on the results, the predictions have been repeated with various window sizes. The results show that the RMSE of the prediction decreases as the window size increases, and the lowest RMSE was obtained for the 7-day prediction with a window size of about 90 days.
For further evaluation, the orbit of the GRACE satellites has been propagated with the predicted atmospheric density correction coefficients, and the calculated positions and velocities of the satellites have been compared with the real orbit data. The results show that the lowest RMSE is obtained for the 7-day prediction, about 50 m in position and 0.15 m/s in velocity.
Conclusion
In this study, due to the complex nature of the atmosphere, an LSTM neural network has been used to model and predict the zero-order scale factor of the harmonic expansion for correcting atmospheric densities. GRACE satellite accelerometer data from the first 180 days of 2014 have been used to train the network. The LSTM has been trained in univariate and multivariate modes. In the multivariate mode, the network has been trained once using the coefficients of the atmospheric correction expansion up to degree two and once using solar and geomagnetic information together with the k_p index. Prediction periods of 7, 14, 30, 60, and 90 days were considered. The results show that the LSTM is capable of predicting the correction coefficient over the considered periods with a mean RMSE of about 0.05 for the zero-order degree. The results also show that the lowest RMSE was obtained for the 7- and 14-day predictions, and that the RMSE decreases as the LSTM window size increases. Comparing the positions and velocities of the GRACE satellites computed with the predicted correction coefficients against real data shows that, for the implemented method, the lowest RMSE was obtained for the 7-day prediction.
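As a minimal sketch of the sliding-window training setup described above (the window length, horizon, and toy series here are hypothetical illustrations, not values from the study), the (input window, target) pairs used to train a recurrent predictor can be built like this:

```python
def make_windows(series, window, horizon=1):
    """Split a univariate series into (input window, target) pairs,
    as used when training a recurrent network on a time series."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])            # last `window` samples
        y.append(series[i + window + horizon - 1])  # value `horizon` steps ahead
    return X, y

# toy series standing in for the daily density-correction coefficient
series = [0.1 * k for k in range(10)]
X, y = make_windows(series, window=3)
```

Increasing `window` gives the network a longer memory of past coefficients, which mirrors the paper's observation that larger window sizes reduced the prediction RMSE.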
Saeed Farzaneh; Mohammad Ali Sharifi; Seyedeh Samira Talebi
Abstract
Extended Abstract
Introduction
In recent years, the country's development in the space industry and its ability to build, launch, and inject satellites into low orbits have placed it among the limited number of countries with such technology. In order to complete the entire cycle of the space industry, satellite navigation and control, which has been neglected since the beginning of the space science movement in the country, must receive special attention. Attitude determination, in one sentence, is the application of a variety of techniques for estimating the attitude of spacecraft. In dynamic astronomy, attitude determination is the process of determining the orientation of an aerospace vehicle with respect to an inertial frame of reference or another entity such as the celestial sphere, certain fields, nearby objects, etc.
A spacecraft attitude determination and control system typically uses a variety of sensors and actuators. Because attitude is described by three or more attitude variables, the difference between the desired and measured states is slightly more complicated than for a thermostat, or even for the position of the satellite in space. Furthermore, the mathematical analysis of attitude determination is complicated by the fact that attitude determination is necessarily either underdetermined or overdetermined.
Materials and methods
Attitude determination typically requires finding three independent quantities, such as any minimal parameterization of the attitude matrix. The mathematics behind attitude determination can be broadly characterized into approaches that use stochastic analysis and approaches that do not. This paper considers a computationally efficient algorithm to optimally estimate the spacecraft attitude from vector observations taken at a single time, which is known as a single-point or single-frame attitude determination method. There have been a number of attitude determination algorithms that compute the optimal attitude of a spacecraft from various observation sources (known as Wahba's problem), and each of the methods has advantages and limitations in terms of accuracy and computational speed. The most popular are: the very important q̂-method, the most popular TRIAD and QUEST, SVD, FOAM, and ESOQ-1, the fastest ESOQ-2, and many other approaches introducing new insights or different characteristics, for instance, the EAA, Euler-2, Euler-q̂, and OLAE.
Results and discussion
Since star detection algorithms can provide more than two stars, the star detector's field of view often contains two or more stars that are identified by the star identification algorithms; the measurement errors in the star vectors can then be compensated by using more than two stars. Methods such as the QUEST algorithm usually minimize an error function to find the optimum. In fact, the QUEST algorithm estimates the optimal eigenvalue and eigenvector of the problem described in the q-method without the need for complex numerical calculations. The fact that the QUEST algorithm retains all the computational advantages of a fast deterministic algorithm while maintaining the desired accuracy underscores why it is typically used.
Conclusion
Simulation results showed that the TRIAD and QUEST (with Shuster's method) attitude determination algorithms can be an efficient alternative to the eight tested algorithms in terms of computational efficiency for a singularity-free attitude representation.
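The deterministic TRIAD solution discussed above can be sketched in a few lines. This is the textbook two-vector formulation in pure Python, not the authors' implementation; the vector pairs in the example are hypothetical, and the pairs are assumed not to be parallel:

```python
from math import sqrt

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def unit(v):
    """Normalize a 3-vector (v must be non-zero)."""
    n = sqrt(sum(x*x for x in v))
    return [x / n for x in v]

def triad(b1, b2, r1, r2):
    """TRIAD attitude solution: build an orthonormal triad from each
    vector pair (body frame b, reference frame r) and combine them into
    the attitude matrix A satisfying A r = b."""
    tb = [unit(b1)]
    tb.append(unit(cross(b1, b2)))   # undefined if b1, b2 are parallel
    tb.append(cross(tb[0], tb[1]))
    tr = [unit(r1)]
    tr.append(unit(cross(r1, r2)))
    tr.append(cross(tr[0], tr[1]))
    # A = sum_k tb_k tr_k^T
    return [[sum(tb[k][i]*tr[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# two sensors agree with the reference frame -> attitude is the identity
A = triad([1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
          [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

The second observation only fixes the rotation about the first vector, so TRIAD weights the first (more accurate) measurement fully, which is why QUEST-style optimal estimators are preferred when more than two stars are available.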
Zahra Banimostafavi; Saeed Farzaneh; Mohammad Ali Sharifi
Abstract
Extended Abstract:
Introduction
Nowadays, engineering structures face many threats. Natural processes and human activities can cause deformation and displacement of dams, bridges, and towers. As a result, any crack in the body of these structures is important and may have dangerous consequences. To prevent catastrophes, the behavior of these structures should be monitored permanently during the construction phase and after opening. The behavior of engineering structures such as dams, power plants, and towers is therefore considered especially important. Three different approaches are usually used to measure such behavior: classical surveying, satellite techniques, and precise instruments.
Materials and methods
Modern equipment is considered a crucial factor in controlling possible changes and preventing human errors. Therefore, different sensors are installed in the structure to measure tensile and shear strain during the construction phase. Moreover, the data received from these sensors are analyzed permanently during the service life to ensure the sustainability of the structure. These tools make internal analysis of such structures possible. Analyzing the behavior of engineering structures is considered one of the most important tasks in the field of geodesy. Inaccurate analysis of displacements can have deadly effects. Various methods are used to measure such displacements, which are divided into two categories, robust and non-robust, based upon the results of the epoch adjustment. To find deformations, a geodetic network should be defined in the first step. If two epochs are not measured in the same datum, the results will not be reliable. Displacement can be measured in two ways: absolute and relative. In the absolute method, some points are considered stable, while in the relative network, all points are considered unstable, and the problem is solved based upon this hypothesis. The relative network method is used in the present study. Regarding network geometry, displacement analysis is performed using two approaches: single-point and combinatorial. Moreover, displacement analysis is divided into the two categories of robust and non-robust methods. Iterative Weighted Similarity Transformation (IWST) and minimum L1-norm estimation are among the robust methods, which calculate the displacement vector by norm minimization. The Global Congruency Test (GCT) is a non-robust statistical method used to determine unstable points in geodetic networks. Robust methods and the GCT are among the classical methods used to discover unstable points in geodetic networks, while Simultaneous Adjustment of Two Epochs (SATE) is a new method used to achieve this purpose.
Combinatorial methods are also considered a suitable alternative for detecting unstable points in a geodetic network. In our previous study, “Evaluation of single-point methods used for detecting displacement in classical geodetic networks”, single-point methods for detecting unstable points were investigated and the SATE method was selected as the optimal one. Unlike single-point methods, combinatorial methods examine all points of the geodetic network simultaneously to discover unstable points.
Results and discussion
The strong dependence of single-point methods on the network geometry makes the discovery of all unstable points impossible. Combinatorial methods are considered a suitable alternative for detecting all unstable points in the geodetic network, as they do not depend strongly on the scale and geometry of the network. The Multiple Sub Sample and M-split methods fall into this category and can detect unstable points efficiently. The present study uses simulated data to evaluate combinatorial methods, namely Multiple Sub Sample (MSS) with angle differences, MSS with distance differences, and M-split, and to compare them with the SATE method in order to choose the optimal method. Unstable points in the real network of the Jamishan dam in Kermanshah Province are then discovered using the identified optimal method.
Conclusion
The present study identifies the best method among the single-point and combinatorial methods. The best method detects the most unstable points and has the lowest dependence on the geometry, scale, and other factors influencing the results. According to the results, Multiple Sub Sample with distance differences is selected as the best method.
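As an illustration only, a drastically simplified per-point stability screen can convey the basic idea of comparing inter-epoch displacements against measurement noise. This is not the GCT, IWST, or SATE procedure (those test the adjusted network jointly with full covariance information), and all coordinates and sigmas below are hypothetical:

```python
def flag_unstable(epoch1, epoch2, sigma, k=3.0):
    """Flag a point as unstable when its 2-D displacement between two
    epochs exceeds k times the combined standard deviation of the epochs.
    Illustration only: real congruency tests use the full network adjustment."""
    unstable = []
    for name, (x1, y1) in epoch1.items():
        x2, y2 = epoch2[name]
        d = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        if d > k * sigma[name] * 2 ** 0.5:   # combined sigma of two epochs
            unstable.append(name)
    return unstable

# hypothetical 2-D coordinates (metres) of two monitoring points in two epochs
epoch1 = {"P1": (100.000, 200.000), "P2": (150.000, 250.000)}
epoch2 = {"P1": (100.050, 200.000), "P2": (150.001, 250.000)}
sigma = {"P1": 0.002, "P2": 0.002}
unstable = flag_unstable(epoch1, epoch2, sigma)   # -> ["P1"]
```

A per-point test like this is exactly what the combinatorial methods in the paper improve upon: it depends on each point's own noise level and ignores the datum and geometry of the network.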
Mohammad Ali Sharifi; Abbas Bahroudi; Saleh Mafi
Abstract
Extended Abstract
Introduction
Attitude determination of fault planes and the slip movements occurring on these planes are among the topics of interest to geoscientists. Among the methods introduced so far to determine the attitude of fault planes, one can mention the use of geological tools for justifying the geometry of faults with surface outcrops, and the examination of changes in the stress field and of the displacements appearing on the Earth's surface. The slip rate is calculated using the displacement of the sedimentary rock layers relative to the displacement time and the simulation models.
Materials & Methods
In this research, a geometric method is presented to calculate the slip rate of the Zagros faults. We consider each fault as a continuous set of fault fragments whose surface positions are known. Given that most of the Zagros faults are hidden, locating the faults is carried out using the geological map of Iran's faults. The first issue in performing these calculations is to determine the attitude of the fault planes in the Zagros seismogenic layer. The seismogenic layer is the part of the Earth's crust whose deformation is elastic and in which the major fractures caused by earthquakes occur. In order to determine the attitude of a fault's focal plane, we use the focal coordinates of the earthquakes occurring around each fault segment. In performing these calculations, the focal locations of the earthquakes are transferred to the geodetic coordinate system, and the equation of the fault plane is calculated using the least-squares method in the Cartesian coordinate system. One can obtain the azimuth of the strike of the planes relative to the astronomical north by calculating the coefficients of the fault planes. To determine the azimuth, we first obtain the unit vector of the strike line as the cross product of the geodetic z-axis (the normal vector of the horizontal plane) and the normal vector of the fault plane.
The fault plane azimuth is then the angle between the strike line vector and the north vector. The north vector is determined by connecting the point located at the center of each fault fragment to the intersection point of the horizontal plane and the z-axis. Variation in the dynamic mechanisms of the faults in the region creates fractures with different directions on the ground. We obtain the slip angle (rake) of a fault from the difference between the fault direction and the direction of the surface fracture, together with the type of fault (strike-slip, dip-slip, or oblique). Having calculated the slip angle, we then calculate the unit vector of the slip direction by rotating the strike line vector by the rake angle.
Results & Discussion
In order to calculate the slip rate of each fault, we consider the Zagros crust as an integrated body which deforms uniformly under the imposed stress. Based on this assumption, we project the velocity vectors of the Zagros geodynamic network onto the fault planes and calculate the slip rates using the slip direction vectors. It should be noted that the velocity vectors of the geodynamic network are defined in the navigation coordinate system; given the definition of the fault plane equations, it is necessary to transfer the velocity vectors to the geodetic coordinate system. The resulting slip rate is a parameter calculated for each fault fragment individually. Considering the effect of systematic errors in the focal positions of the earthquakes (including errors in the focal depth and the epicenter location), the slip rates obtained for the fault fragments always contain systematic errors. Therefore, we define an average slip rate for each fault in order to reduce the error effect. In this study, the velocity vectors of seventeen permanent stations of the Zagros geodynamic network provided by the National Cartographic Center (NCC) are used.
The focal positions of the earthquakes are published by the International Institute of Earthquake Engineering and Seismology (IIEES).
Conclusion
The obtained results showed that regions with high fault slip rates usually have dense earthquake activity. In addition, the seismicity potential of any region can be assessed by comparing the slip rate of each fault with the density of its earthquakes in the region. According to the changes in the slip rate obtained for Zagros, the faults in the western part of Zagros, especially in Ilam Province, have low slip rates. However, in terms of earthquake density, the province is considered one of the seismic areas of the country. This means that most of the slip movements occurring on the faults of the western region have been accompanied by vibration.
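The two geometric steps described above, fitting a plane to earthquake focal positions by least squares and taking the strike direction from the cross product of the vertical axis with the plane normal, can be sketched as follows. The local east-north-up frame and the toy focal points are assumptions for illustration, not the paper's geodetic coordinate system:

```python
from math import atan2, degrees

def solve3(A, v):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    M = [row[:] + [vi] for row, vi in zip(A, v)]
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3-D focal positions,
    via the normal equations of the design rows (x, y, 1)."""
    N = [[0.0] * 3 for _ in range(3)]
    u = [0.0] * 3
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                N[i][j] += row[i] * row[j]
            u[i] += row[i] * z
    return solve3(N, u)   # -> (a, b, c)

def strike_azimuth(a, b):
    """Azimuth of the strike line, clockwise from north. The plane normal
    is (a, b, -1); strike = up x normal = (-b, a, 0), assuming a local
    frame with x = east, y = north, z = up."""
    return degrees(atan2(-b, a)) % 360.0

# toy focal positions lying on the plane z = x (strike toward north)
points = [(0.0, 0.0, 0.0), (1.0, 0.0, 1.0), (0.0, 1.0, 0.0), (2.0, 1.0, 2.0)]
a, b, c = fit_plane(points)
azimuth = strike_azimuth(a, b)
```

Projecting a station velocity vector onto the resulting slip direction (the strike vector rotated by the rake) then gives the per-fragment slip rate described in the abstract.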
Farideh Sabzehee; Mohammad Ali Sharifi; Mehdi Akhoondzadeh hanzaee
Abstract
Extended Abstract
Electron density is one of the significant parameters for monitoring and describing the ionosphere. The ionosphere is a consequential source of error for the GPS signals that traverse it on their way to ground-based receivers, because it contains a high concentration of free electrons and ions released by the ionizing action of solar X-ray and ultraviolet radiation on atmospheric constituents. Radio Occultation (RO) is one of the most modern satellite techniques for studying vertical profiles of neutral density, temperature, pressure, and water vapor in the stratosphere and troposphere, and ionospheric electron density profiles, with high vertical resolution. Since the RO technique using GPS signals was first employed by the Global Positioning System Meteorology (GPS/MET) experiment, the low-Earth-orbit-based GPS RO technique has proven to be a successful method for exploring the Earth's lower atmosphere and ionosphere.
The Abel transformation is the basic assumption made in the retrieval of radio-occulted ionospheric parameters. The Abel inversion is a powerful tool for retrieving high-resolution vertical profiles of electron density from GPS radio occultation data collected by satellites in Low Earth Orbit (LEO).
The COSMIC satellites record measurements throughout the whole day and are not limited to specific times or special atmospheric conditions. It should be noted that the GPS radio occultation techniques provide continuous and useful information on the ionospheric layers, unlike the pointwise measurements obtained by other satellites.
The COSMIC satellites also record the altitude of each electron density profile measurement. COSMIC provides more than 1000 electron density profiles per day with approximately global coverage, part of which covers Iran. In this approach, the LEO-GPS line of sight is occulted by the Earth's limb as the LEO satellite sets (or rises). The GPS-LEO radio link thus successively samples the atmospheric layers at different altitudes. The ionosphere is highly variable in space and time; thus, for modeling the electron density profile, temporal changes (diurnal and seasonal) and location changes (the geographical position of the station) must be considered. In this research, the input space includes the day number (seasonal variation), hour (diurnal variation), latitude, longitude, height, and the F10.7 index (a measure of solar activity). The output of the model is the ionospheric electron density profile (Ne). The COSMIC observations and the IRI-2007-based electron density profiles were also analyzed during the solar minimum period. In this research, we used a feedforward Artificial Neural Network (ANN) with 55 neurons in the hidden layer to model the electron density profiles of the COSMIC satellites. The performance of the ANN models was evaluated using the correlation coefficient (R = 92%) and R-squared (0.83). It was found that the ANN model could be applied successfully to estimating the electron density profiles retrieved from FORMOSAT-3/COSMIC. The IRI model electron density profile is compared with the COSMIC RO measurements during each month of the year 2007 over Iran. The electron density profiles from all three International Reference Ionosphere (IRI) options, namely IRI-NEQ, IRI-2001, and IRI-01-Corr, are used.
The results showed that the electron density from the IRI-2007 model is not satisfactory over Iran, whereas the ANN model electron density profile is in very good agreement with the COSMIC RO measurements. It was concluded that the IRI-NEQ option is more appropriate than the other two models.
The results showed that the differences between the modeled and observed electron density profiles are much smaller than the differences between the IRI-2007 models and the observations. Maximum changes occurred in January and December at an altitude of about 450 km, and minimum changes were recorded in November at a height of 250 km and in April at a height of 450 km. The differences also decreased in summer at higher altitudes and in winter at lower altitudes.
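The feedforward architecture described above (6 inputs: day number, hour, latitude, longitude, height, F10.7; one hidden layer of 55 neurons; one output for Ne) can be sketched as follows. The random weights and the tanh activation are placeholders for illustration, since the trained weights and the exact activation are not given in the abstract:

```python
import math
import random

def forward(x, W1, b1, W2, b2):
    """One-hidden-layer feedforward pass: h = tanh(W1 x + b1), y = W2 . h + b2."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bb)
         for row, bb in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2

random.seed(0)
n_in, n_hid = 6, 55   # 6 input features, 55 hidden neurons as in the paper
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [random.uniform(-1, 1) for _ in range(n_hid)]
b2 = 0.0

# hypothetical normalized input (day, hour, lat, lon, height, F10.7)
ne = forward([0.5] * n_in, W1, b1, W2, b2)
```

In practice the inputs would be normalized and the weights obtained by supervised training against the COSMIC electron density profiles; this sketch only shows the shape of the model that produced the reported R = 92% fit.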